
Conversation

RahulC7 (Contributor) commented Nov 19, 2025

Summary:

Context

We continue from D84284794 to add support for 16-bit activations. Note that although the operators already support 16-bit activations, they do so only when the weights are also 16-bit. To support 16-bit activations with 8-bit weights, we need to change the way we template some functions.

Current Behavior

Right now, we compose two macros: the `ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16` macro:

https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/operators.h?lines=22-25

and the per-operator function macro (`quantized_linear`, for example):

https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/quantized_linear_out.cpp?lines=30-41

Together they expand into a switch statement that calls the `quantized_linear` function with the correct template parameter.

However, this assumes that the input activations and the weights share the same dtype, which is not always the case.
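For illustration, here is a minimal, self-contained sketch of that single-dtype dispatch pattern. The macro, type, and function names are made up for the example and are not the actual Jarvis macros; they only show how a FORALL-style type list plus a per-operator case macro expands into a switch over one dtype:

```cpp
#include <cstdint>
#include <stdexcept>

enum class ScalarType { Byte, Char, Short };

// Kernel templated on a single ctype: activations and weights share T.
template <typename T>
void quantized_linear_(/* src, weight, bias, ..., out */) {}

// Illustrative stand-in for the FORALL type-list macro.
#define FORALL_QUANTIZED_TYPES(_) \
  _(uint8_t, Byte)                \
  _(int8_t, Char)                 \
  _(int16_t, Short)

// Illustrative stand-in for the per-operator case macro.
#define LINEAR_CASE(ctype, name)  \
  case ScalarType::name:          \
    quantized_linear_<ctype>();   \
    break;

void quantized_linear_out(ScalarType dtype) {
  switch (dtype) {
    FORALL_QUANTIZED_TYPES(LINEAR_CASE)
    default:
      throw std::runtime_error("unhandled dtype");
  }
}
```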

This Diff

We fix the generic implementation by allowing two template parameters: one for the weight dtype and one for the input activation dtype.
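As a rough sketch of what the two-parameter version looks like conceptually (hypothetical names and hand-written dispatch, not the actual diff): the kernel is templated on the activation dtype and the weight dtype separately, and dispatch selects both:

```cpp
#include <cstdint>
#include <stdexcept>

enum class ScalarType { Byte, Char, Short };

// Kernel templated on the activation dtype and the weight dtype separately.
template <typename ActT, typename WeightT>
void quantized_linear_(/* src, weight, bias, ..., out */) {}

void quantized_linear_out(ScalarType act_dtype, ScalarType weight_dtype) {
  if (act_dtype == ScalarType::Short && weight_dtype == ScalarType::Char) {
    // New mixed-precision case: int16 activations with int8 weights.
    quantized_linear_<int16_t, int8_t>();
  } else if (act_dtype == ScalarType::Char && weight_dtype == ScalarType::Char) {
    quantized_linear_<int8_t, int8_t>();
  } else if (act_dtype == ScalarType::Byte && weight_dtype == ScalarType::Byte) {
    quantized_linear_<uint8_t, uint8_t>();
  } else {
    throw std::runtime_error("unsupported activation/weight dtype pair");
  }
}
```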

Reviewed By: hsharma35

Differential Revision: D86538176

Copilot AI review requested due to automatic review settings November 19, 2025 22:21
pytorch-bot bot commented Nov 19, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15901

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 1 Unrelated Failure

As of commit 40d03e1 with merge base d2c011e:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was already present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Nov 19, 2025
meta-codesync bot commented Nov 19, 2025

@RahulC7 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D86538176.

github-actions bot commented:

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot finished reviewing on behalf of RahulC7 November 19, 2025 22:25
Copilot AI (Contributor) left a comment


Pull Request Overview

This PR adds support for 16-bit activations with 8-bit weights in the Cadence Quantizer for linear operations, continuing from a previous differential (D84284794). The key changes enable mixed precision quantization where activations use int16 while weights use int8, addressing the limitation where both had to use the same dtype.

  • Adds int16 activation + int8 weight support by delegating to generic implementations from Jarvis/min_runtime/operators
  • Introduces new build dependencies to support the int16 templating
  • Adds test coverage for int16 activation scenarios

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

Reviewed files:

  • backends/cadence/hifi/operators/op_quantized_linear_out.cpp: Adds conditional branches to handle int16 activations with int8 weights by calling generic implementations, and removes __ET_UNUSED markers from now-used parameters.
  • backends/cadence/hifi/operators/targets.bzl: Adds dependencies on Jarvis min_runtime operators to support int16 templating and removes quantized_linear_out from the main operator list.
  • backends/cadence/hifi/operators/tests/test_op_quantized_linear_out.cpp: Adds a new test file with an int16 activation test case for quantized_linear_out.


using std::string_view;

class HiFiQuantizedLinearTest : public OperatorTest {
public:

Copilot AI Nov 19, 2025


[nitpick] The public: access specifier is redundant as there are no public members declared. Consider removing this line to improve code clarity.

Suggested change
public:

out_zero_point,
offset,
out
);

Copilot AI Nov 19, 2025


Missing return statement after handling the int16 activation case. The function will continue to execute the subsequent else if branches even after successfully handling the Short/Char case, potentially leading to incorrect behavior or error messages. Add a return; statement after line 236.

Suggested change
);
);
return;
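To make the control-flow concern concrete, here is a minimal, self-contained illustration (hypothetical structure and function names, not the actual operator code): without the early `return;`, execution falls through to the later dtype checks and error path even after the int16 case has already been handled.

```cpp
#include <cstdio>

enum class ScalarType { Byte, Char, Short };

void handle_int16_activation_case() { std::puts("handled int16 activations"); }
void report_unsupported() { std::puts("error: unsupported dtype combination"); }

void dispatch(ScalarType act, ScalarType weight) {
  if (act == ScalarType::Short && weight == ScalarType::Char) {
    handle_int16_activation_case();
    return;  // without this, the error path below would also run
  }
  if (act == ScalarType::Char && weight == ScalarType::Char) {
    // existing 8-bit path ...
    return;
  }
  report_unsupported();
}
```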

out_zero_point,
offset,
out
);

Copilot AI Nov 19, 2025


Missing return statement after handling the int16 activation case. The function will continue to execute the subsequent else if branches even after successfully handling the Short/Char case, potentially leading to incorrect behavior or error messages. Add a return; statement after line 295.

Suggested change
);
);
return;

int64_t out_multiplier,
int64_t out_shift,
int64_t out_zero_point,
__ET_UNUSED const optional<Tensor>& offset,

Copilot AI Nov 19, 2025


The __ET_UNUSED marker should be removed from both the ctx and offset parameters since they are now being used in the int16 activation case (lines 284, 293). This creates an inconsistency with quantized_linear_out (line 211, 220) where these markers have already been removed.
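For context, a tiny standalone sketch of what removing the marker means in general (hypothetical functions; `__ET_UNUSED` is re-declared here only for the example and stands in for the real ExecuTorch macro): the marker suppresses unused-parameter warnings, so it should be dropped once the parameter is actually read.

```cpp
#include <cstdint>

// Stand-in definition for this example only; the real macro lives in the
// ExecuTorch runtime headers.
#define __ET_UNUSED [[maybe_unused]]

// Before: `offset` is never read, so it is marked unused.
int64_t linear_before(int64_t acc, __ET_UNUSED int64_t offset) {
  return acc;
}

// After: `offset` is read by the new int16 branch, so the marker is removed.
int64_t linear_after(int64_t acc, int64_t offset) {
  return acc + offset;
}
```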

define_operator(op)

# quantized_linear_out and quantized_linear_per_tensor_out needs additional dependency for int16 support
define_operator("quantized_linear_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])

Copilot AI Nov 19, 2025


Remove the trailing comma after the closing bracket on line 126. While Python allows trailing commas in lists, this is inconsistent with the style used elsewhere in the codebase and may cause issues with some linters or build tools.

Suggested change
define_operator("quantized_linear_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])
define_operator("quantized_linear_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers"])


# quantized_linear_out and quantized_linear_per_tensor_out needs additional dependency for int16 support
define_operator("quantized_linear_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])
define_operator("quantized_linear_per_tensor_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])

Copilot AI Nov 19, 2025


Remove the trailing comma after the closing bracket on line 127. While Python allows trailing commas in lists, this is inconsistent with the style used elsewhere in the codebase and may cause issues with some linters or build tools.

Suggested change
define_operator("quantized_linear_per_tensor_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])
define_operator("quantized_linear_per_tensor_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers"])

RahulC7 added a commit to RahulC7/executorch that referenced this pull request Nov 20, 2025
…for linear (pytorch#15901)

RahulC7 added a commit to RahulC7/executorch that referenced this pull request Nov 20, 2025
…for linear (pytorch#15901)

Labels: CLA Signed, fb-exported, meta-exported